Bayesian interpretation of kernel regularization

In machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature, but it is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be an inner product space but instead a more general reproducing kernel Hilbert space. In Bayesian probability, kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the ''input space'' is usually a ''space of vectors'' and the ''output space'' a ''space of scalars''. More recently, these methods have been extended to problems with multiple outputs, such as multi-task learning.
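The dual role of the kernel can be made concrete: the same function that measures similarity between inputs serves as the covariance function of a Gaussian process prior. Below is a minimal NumPy sketch; the squared-exponential kernel and its length scale are illustrative assumptions, not choices made by this article.
<syntaxhighlight lang="python">
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 * length_scale^2)).
    # (Illustrative choice; any positive-semidefinite kernel works the same way.)
    return np.exp(-np.sum((a - b) ** 2) / (2 * length_scale ** 2))

# The kernel matrix over a set of inputs doubles as the covariance matrix
# of the Gaussian process prior evaluated at those inputs.
X = np.linspace(0.0, 1.0, 5)[:, None]
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X])

# Draw one sample of prior function values f ~ N(0, K);
# the small jitter keeps K numerically positive semidefinite.
f = np.random.default_rng(0).multivariate_normal(np.zeros(len(X)), K + 1e-9 * np.eye(len(X)))
print(f)
</syntaxhighlight>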
In this article we analyze the connections between the regularization and Bayesian points of view for kernel methods in the case of scalar outputs. A mathematical equivalence between the two is easily proved when the reproducing kernel Hilbert space is ''finite-dimensional''; the infinite-dimensional case raises subtle mathematical issues, so here we consider only the finite-dimensional case. We start with a brief review of the main ideas underlying kernel methods for scalar learning, and introduce the concepts of regularization and Gaussian processes. We then show how both points of view arrive at essentially equivalent estimators, and demonstrate the connection that ties them together.
==The Supervised Learning Problem==

The classical supervised learning problem requires estimating the output for some new input point \mathbf{x}' by learning a scalar-valued estimator \hat{f}(\mathbf{x}') on the basis of a training set S consisting of n input-output pairs, S = (\mathbf{X},\mathbf{Y}) = (\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_n,y_n). Given a symmetric and positive bivariate function k(\cdot,\cdot) called a ''kernel'', one of the most popular estimators in machine learning is given by

:\hat{f}(\mathbf{x}') = \mathbf{k}^\top (\mathbf{K} + \lambda n \mathbf{I})^{-1} \mathbf{Y},

where \mathbf{K} \equiv k(\mathbf{X},\mathbf{X}) is the kernel matrix with entries \mathbf{K}_{ij} = k(\mathbf{x}_i,\mathbf{x}_j), \mathbf{k} = [k(\mathbf{x}_1,\mathbf{x}'),\ldots,k(\mathbf{x}_n,\mathbf{x}')]^\top, \mathbf{Y} = [y_1,\ldots,y_n]^\top, and \lambda > 0 is a regularization parameter. We will see how this estimator can be derived from both a regularization and a Bayesian perspective.
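As a concrete illustration of the estimator above, here is a short NumPy sketch of kernel ridge regression. The Gaussian (RBF) kernel, the bandwidth gamma, and the names rbf_kernel and fit_predict are illustrative assumptions, not part of the article.
<syntaxhighlight lang="python">
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel k(a, b) = exp(-gamma * ||a - b||^2), computed pairwise:
    # A is (n, d), B is (m, d); the result is the (n, m) matrix of kernel values.
    sq_dists = np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq_dists)

def fit_predict(X, Y, X_new, lam=0.1, gamma=1.0):
    # Kernel ridge regression: f_hat(x') = k^T (K + lam * n * I)^{-1} Y.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)                          # kernel matrix K_ij = k(x_i, x_j)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), Y)  # (K + lam * n * I)^{-1} Y
    return rbf_kernel(X_new, X, gamma) @ alpha           # k^T alpha at each new input

# Toy usage: fit a noisy sine curve and predict at a few new points.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
X_new = np.linspace(-3.0, 3.0, 5)[:, None]
print(fit_predict(X, Y, X_new))
</syntaxhighlight>
Note that the sketch solves the linear system (\mathbf{K} + \lambda n \mathbf{I})\boldsymbol{\alpha} = \mathbf{Y} rather than forming the explicit inverse, which is the standard numerically stable choice.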
